Current Issue: October - December | Volume: 2019 | Issue Number: 4 | Articles: 5
Online Social Networks (OSNs) have found widespread application in every area of our life. A large number of people have signed up to OSNs for different purposes, including meeting old friends, choosing a given company, and identifying expert users on a given topic, producing a large number of social connections. These aspects have led to the birth of a new generation of OSNs, called Multimedia Social Networks (MSNs), in which user-generated content plays a key role in enabling interactions among users. In this work, we propose a novel expert-finding technique exploiting a hypergraph-based data model for MSNs. In particular, some user-ranking measures, obtained by considering only particularly useful hyperpaths, have been profitably used to evaluate the related degree of expertise with respect to a given social topic. Several experiments on Last.FM have been performed to evaluate the proposed approach's effectiveness, encouraging future work in this direction for supporting several applications such as multimedia recommendation, influence analysis, and so on.
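The abstract does not spell out the ranking measures, so the fragment below is only a minimal sketch of the general idea, assuming a toy hypergraph whose hyperedges link users, items and topics and a simple hyperpath-counting score. All node names and the weighting rule are illustrative, not the paper's method.

```python
# Minimal sketch of topic-conditioned user ranking over an MSN hypergraph.
# Hyperedges, weights and the scoring rule are illustrative assumptions.
from collections import defaultdict

# A hyperedge links a set of nodes (users "u:*", items "i:*", topics "t:*").
hyperedges = [
    {"u:alice", "i:song1", "t:jazz"},   # e.g. alice tagged song1 with "jazz"
    {"u:bob",   "i:song1", "t:jazz"},
    {"u:bob",   "i:song2", "t:rock"},
    {"u:carol", "i:song2", "t:jazz"},
]

def rank_users_for_topic(topic, edges):
    """Score each user by the weighted number of 2-step hyperpaths
    topic -> item -> user, discounting large hyperedges."""
    scores = defaultdict(float)
    # items directly co-occurring with the topic
    items = {n for e in edges if topic in e for n in e if n.startswith("i:")}
    for e in edges:
        if items & e:                        # hyperedge touches a relevant item
            for n in e:
                if n.startswith("u:"):
                    scores[n] += 1.0 / len(e)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_users_for_topic("t:jazz", hyperedges))
```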
Within the strongly regulated avionic engineering field, conventional graphical desktop hardware and software application programming interfaces (APIs) cannot be used because they do not conform to avionic certification standards. We observe the need for better avionic graphical hardware, but system engineers lack system design tools related to graphical hardware. The endorsement of an optimal hardware architecture by estimating the performance of graphical software, when a stable rendering engine does not yet exist, represents a major challenge. As proven by previous hardware emulation tools, there is also a potential for development cost reduction, by enabling developers to obtain a first estimate of the performance of their graphical engine early in the development cycle. In this paper, we propose to replace expensive development platforms with predictive software running on a desktop computer. More precisely, we present a system design tool that helps predict the rendering performance of graphical hardware based on the OpenGL Safety Critical API. First, we create nonparametric models of the underlying hardware, with machine learning, by analyzing the instantaneous frames per second (FPS) of the rendering of a synthetic 3D scene drawn multiple times with various characteristics that are typically found in synthetic vision applications. The number of characteristic combinations used during this supervised training phase is a subset of all possible combinations, but performance predictions can be arbitrarily extrapolated. To validate our models, we render an industrial scene with characteristic combinations not used during the training phase and compare the predictions to the measured values. We find a median prediction error of less than 4 FPS.
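As a rough illustration of the performance-modelling idea (learn FPS from scene characteristics measured on real hardware, then query the model for unseen combinations), here is a small sketch. The regressor choice (a random forest), the feature names and the synthetic FPS values are assumptions, not the paper's exact setup.

```python
# Sketch: train a nonparametric model on (scene characteristics -> measured FPS)
# pairs, then predict FPS for a characteristic combination not seen in training.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row: [vertex_count, texture_resolution, overdraw_factor] for one frame.
X_train = rng.uniform([1e3, 128, 1], [1e6, 4096, 8], size=(500, 3))
# Stand-in for the instantaneous FPS measured while drawing the synthetic scene.
y_train = 1e9 / (X_train[:, 0] * X_train[:, 2] + X_train[:, 1] ** 2)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

unseen = np.array([[250_000, 2048, 3]])          # combination not used in training
print(f"predicted FPS: {model.predict(unseen)[0]:.1f}")
```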
The process of image retrieval presents an interesting tool for different domains related to computer vision, such as multimedia retrieval, pattern recognition, medical imaging, video surveillance and movement analysis. Visual characteristics of images, such as color, texture and shape, are used to identify the content of images. However, the retrieval process becomes very challenging due to the difficult management of large databases in terms of storage, computational complexity, temporal performance and similarity representation. In this paper, we propose a cloud-based platform in which we integrate several feature extraction algorithms used for content-based image retrieval (CBIR) systems. Moreover, we propose an efficient combination of SIFT and SURF descriptors that allows extracting and matching image features and hence improves the process of image retrieval. The proposed algorithms have been implemented on the CPU and also adapted to fully exploit the power of GPUs. Our platform is presented as a responsive web solution that offers users the possibility to exploit, test and evaluate image retrieval methods. The platform offers users simple-to-use access to different algorithms, such as SIFT and SURF descriptors, without the need to set up the environment or install anything, while spending minimal effort on preprocessing and configuration. On the other hand, our cloud-based CPU and GPU implementations are scalable, which means that they can be used even with large databases of multimedia documents. The obtained results showed: 1. precision improvement in terms of recall and precision; 2. performance improvement in terms of computation time as a result of exploiting GPUs in parallel; 3. reduction of energy consumption.
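The abstract does not detail how SIFT and SURF are combined; one common way, sketched below under that assumption, is to detect keypoints with SURF (fast detection) and describe them with SIFT (distinctive descriptors), then match with a ratio test. Note that SURF requires opencv-contrib built with the non-free modules enabled; paths and parameters are illustrative.

```python
# Illustrative SIFT/SURF combination for CBIR matching (one possible scheme,
# not necessarily the paper's fusion strategy).
import cv2

def match_score(path_query, path_candidate, ratio=0.75):
    img1 = cv2.imread(path_query, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_candidate, cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # keypoint detector
    sift = cv2.SIFT_create()                                   # descriptor

    kp1 = surf.detect(img1, None)
    kp2 = surf.detect(img2, None)
    kp1, des1 = sift.compute(img1, kp1)
    kp2, des2 = sift.compute(img2, kp2)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    good = []
    for pair in matches:                       # Lowe's ratio test
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)                           # similarity score for ranking

# print(match_score("query.jpg", "candidate.jpg"))
```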
A real-time mobile content player was developed that can recognize and reflect emotions in real time using a smartphone. For effective emotion awareness, a photoplethysmogram (PPG), which is a biological signal, was measured to recognize emotional changes in users presented with content intended to induce an emotional response. To avoid the need for a separate sensor to measure the PPG, PPG signals were extracted from the red (R) values of images acquired by the rear camera of a smartphone. To reflect an emotion, the saturation (S) and brightness (V) levels, which are related to the ambience of the content, are changed to reflect the emotional changes of the user within the content itself in real time. Arousal- and relaxation-inducing scenarios were conducted to validate the effectiveness. Ten university students (five males and five females) participated in the experiment; they had no cardiac disease and were asked not to drink or smoke before the experiment. The sample t-test results show that the average peak-to-peak interval (PPI), which is the time interval between the peaks of PPG signals, was significantly lower when viewing the content under the arousal-inducing scenario than when watching regular content, and it was determined that the emotion of the user was led to a state of arousal. The average PPI was significantly higher when the content was viewed in the relaxation-inducing scenario compared to regular content, and it was determined that the emotion of the user was induced to a state of relaxation. The designed emotional content player was confirmed to be an interactive system, in which the video content and user concurrently affect each other through the system.
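A minimal sketch of the two signal paths described above: a PPG-like signal taken as the mean red value of each rear-camera frame, from which peak-to-peak intervals (PPI) are computed, and an S/V adjustment of the displayed frame driven by an arousal estimate. The peak-detection settings and the scaling gains are illustrative assumptions, not the system's actual parameters.

```python
# Sketch: camera-based PPI extraction and arousal-driven recoloring.
import cv2
import numpy as np
from scipy.signal import find_peaks

def ppi_from_frames(frames, fps):
    """frames: list of BGR images from the rear camera (fingertip on the lens).
    Returns peak-to-peak intervals of the red-channel signal, in seconds."""
    red_mean = np.array([f[:, :, 2].mean() for f in frames])   # R channel mean
    peaks, _ = find_peaks(red_mean, distance=int(0.4 * fps))   # peaks >= 0.4 s apart
    return np.diff(peaks) / fps

def recolor_for_arousal(frame_bgr, arousal):
    """Scale saturation and brightness of a content frame; arousal in [0, 1]."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[:, :, 1] *= 1.0 + 0.5 * arousal      # more vivid when aroused (assumed gain)
    hsv[:, :, 2] *= 1.0 - 0.3 * arousal      # slightly darker when aroused (assumed gain)
    hsv = np.clip(hsv, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```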
We propose a scale-invariant deep neural network model based on wavelets for single image super-resolution (SISR). The wavelet approximation images and their corresponding wavelet sub-bands across all predefined scale factors are combined to form a large training data set. Then, mappings are learned between the wavelet sub-band images and their corresponding approximation images. Finally, gradient clipping is used to boost the training speed of the algorithm. Furthermore, the stationary wavelet transform (SWT) is used instead of the discrete wavelet transform (DWT), due to its up-scaling property. In this way, we can preserve more information about the images. In the proposed model, the high-resolution image is recovered with detailed features, due to the redundancy (across scales) property of wavelets. Experimental results show that the proposed model outperforms state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
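To illustrate the wavelet side of the pipeline, the sketch below uses a stationary wavelet transform, which keeps the sub-bands at the original resolution (no down-sampling), the property mentioned above. The network that maps approximation images to sub-bands is not shown; this only prepares one training pair, with the wavelet and level as assumed parameters.

```python
# Sketch: single-level SWT decomposition of an image into an approximation and
# three detail sub-bands, all at the input resolution (pywt requires the image
# size to be divisible by 2**level).
import numpy as np
import pywt

def swt_training_pair(image, wavelet="haar", level=1):
    """Return (approximation, [LH, HL, HH]) at the same resolution as `image`."""
    coeffs = pywt.swt2(image.astype(np.float64), wavelet, level=level)
    cA, (cH, cV, cD) = coeffs[0]              # single-level decomposition
    return cA, [cH, cV, cD]

img = np.random.rand(128, 128)                # stand-in for a training image
approx, subbands = swt_training_pair(img)
print(approx.shape, [b.shape for b in subbands])   # all (128, 128)
```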